Supplementary Material: Information Geometry of the Retinal Representation Manifold
Ding, Xuehao
Further experimental details are described in Ref. [4]. Each spatiotemporal stimulus spanned 400 ms, corresponding to the retinal integration timescale.
Figure 1: (a) The log-likelihood of the empirical data for each PMF, averaged over cells; the black line is the identity line. The central 20 × 20 arrays are shown.
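For concreteness, here is a minimal numpy sketch of the quantity plotted in Figure 1(a): the log-likelihood of empirical spike counts under a fitted PMF, averaged over cells. The function name and array layout are assumptions for illustration, not the paper's code.

```python
import numpy as np

def avg_log_likelihood(counts, pmfs):
    """Mean log-likelihood of empirical spike counts under fitted PMFs.

    counts: list of 1-D integer arrays, one per cell (observed spike counts).
    pmfs:   list of 1-D arrays; pmfs[c][k] is the model probability that
            cell c fires k spikes (each array sums to 1).
    Returns the per-sample log-likelihood, averaged over cells.
    """
    lls = []
    for x, p in zip(counts, pmfs):
        p = np.clip(p, 1e-12, None)      # guard against log(0)
        lls.append(np.log(p[x]).mean())  # mean over samples for this cell
    return float(np.mean(lls))           # average over cells
```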
GT-SNT: A Linear-Time Transformer for Large-Scale Graphs via Spiking Node Tokenization
Zhang, Huizhe, Li, Jintang, Zhu, Yuchang, Zhong, Huazhen, Chen, Liang
Graph Transformers (GTs), which integrate message passing and self-attention mechanisms, have achieved promising empirical results in graph prediction tasks. However, the design of scalable and topology-aware node tokenization has lagged behind other modalities. This gap becomes critical as the quadratic complexity of full attention renders GTs impractical on large-scale graphs. Recently, Spiking Neural Networks (SNNs), as brain-inspired models, have provided an energy-saving scheme that converts input intensity into discrete spike-based representations through event-driven spiking neurons. Inspired by these characteristics, we propose a linear-time Graph Transformer with Spiking Node Tokenization (GT-SNT) for node classification. By integrating multi-step feature propagation with SNNs, spiking node tokenization generates compact, locality-aware spike count embeddings as node tokens, avoiding predefined codebooks and their utilization issues. The codebook-guided self-attention leverages these tokens to perform node-to-token attention for linear-time global context aggregation. In experiments, we compare GT-SNT with state-of-the-art baselines on node classification datasets ranging from small to large. The results show that GT-SNT achieves comparable performance on most datasets and reaches up to 130× faster inference than other GTs.
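To make the tokenization idea concrete, below is a minimal PyTorch sketch in which multi-step feature propagation drives a leaky integrate-and-fire neuron and per-dimension spike counts serve as node tokens. The function name, the dense adjacency, and the LIF parameters (`tau`, `v_th`) are illustrative assumptions, not GT-SNT's actual implementation.

```python
import torch

def spiking_node_tokens(x, adj, steps=4, tau=0.9, v_th=1.0):
    """Hypothetical spiking node tokenization sketch.

    x:   (N, d) node features.
    adj: (N, N) normalized adjacency (dense here for clarity; a sparse
         matmul would be used at scale).
    Returns integer spike-count embeddings of shape (N, d).
    """
    v = torch.zeros_like(x)        # membrane potential
    counts = torch.zeros_like(x)   # accumulated spike counts
    h = x
    for _ in range(steps):
        h = adj @ h                        # one hop of feature propagation
        v = tau * v + h                    # leaky integration of input
        spikes = (v >= v_th).float()       # fire where threshold is crossed
        v = v * (1.0 - spikes)             # hard reset after a spike
        counts += spikes
    return counts                          # locality-aware node tokens
```

The spike counts are bounded integers, which is what lets them act as compact tokens without a learned codebook.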
SpikingBrain: Spiking Brain-inspired Large Models
Pan, Yuqi, Feng, Yupeng, Zhuang, Jinghao, Ding, Siyu, Xu, Han, Liu, Zehao, Sun, Bohan, Chou, Yuhong, Qiu, Xuerui, Deng, Anlin, Hu, Anjie, Wang, Shurong, Zhou, Peng, Yao, Man, Wu, Jibin, Yang, Jian, Sun, Guoliang, Xu, Bo, Li, Guoqi
Mainstream Transformer-based large language models face major efficiency bottlenecks: training computation scales quadratically with sequence length, and inference memory grows linearly, limiting long-context processing. Building large models on non-NVIDIA platforms also poses challenges for stable and efficient training. To address this, we introduce SpikingBrain, a family of brain-inspired models designed for efficient long-context training and inference. SpikingBrain leverages the MetaX GPU cluster and focuses on three aspects: (1) Model Architecture: linear and hybrid-linear attention architectures with adaptive spiking neurons; (2) Algorithmic Optimizations: an efficient, conversion-based training pipeline and a dedicated spike coding framework; (3) System Engineering: customized training frameworks, operator libraries, and parallelism strategies tailored to MetaX hardware. Using these techniques, we develop two models: SpikingBrain-7B, a linear LLM, and SpikingBrain-76B, a hybrid-linear MoE LLM. These models demonstrate the feasibility of large-scale LLM development on non-NVIDIA platforms, and training remains stable for weeks on hundreds of MetaX GPUs with Model FLOPs Utilization at expected levels. SpikingBrain achieves performance comparable to open-source Transformer baselines while using only about 150B tokens for continual pre-training. Our models also significantly improve long-context efficiency and deliver inference with (partially) constant memory and event-driven spiking behavior. For example, SpikingBrain-7B attains over 100x speedup in Time to First Token for 4M-token sequences. Furthermore, the proposed spiking scheme achieves 69.15 percent sparsity, enabling low-power operation. Overall, this work demonstrates the potential of brain-inspired mechanisms to drive the next generation of efficient and scalable large model design.
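For readers unfamiliar with the linear-attention family mentioned above, the sketch below shows generic kernelized linear attention: applying a positive feature map and computing phi(K)^T V first makes the cost linear in sequence length. It illustrates the general mechanism only, under the assumed elu+1 feature map; it is not SpikingBrain's architecture.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v):
    """Kernelized linear attention: O(N) in sequence length N.

    q, k: (N, d) queries and keys; v: (N, d_v) values.
    """
    phi_q = F.elu(q) + 1.0                 # positive feature map
    phi_k = F.elu(k) + 1.0
    kv = phi_k.T @ v                       # (d, d_v): key/value summary, built once
    z = phi_q @ phi_k.sum(dim=0, keepdim=True).T   # (N, 1) normalizer
    return (phi_q @ kv) / z                # (N, d_v), no N x N attention matrix
```

Because the key/value summary `kv` has fixed size, decoding with it needs only constant memory per step, which is the property the abstract's "(partially) constant memory" inference relies on.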
Spiking Neural Networks: The Future of Brain-Inspired Computing
Spiking Neural Networks (SNNs) represent the latest generation of neural computation, offering a brain-inspired alternative to conventional Artificial Neural Networks (ANNs). Unlike ANNs, which depend on continuous-valued signals, SNNs operate using discrete spike events, making them inherently more energy-efficient and temporally dynamic. This study presents a comprehensive analysis of SNN design, covering neuron models, training algorithms, and multi-dimensional performance metrics, including accuracy, energy consumption, latency, spike count, and convergence behavior. Key neuron models, such as the Leaky Integrate-and-Fire (LIF) model, and training strategies, including surrogate gradient descent, ANN-to-SNN conversion, and Spike-Timing-Dependent Plasticity (STDP), are examined in depth. Results show that surrogate-gradient-trained SNNs closely approximate ANN accuracy (within 1-2%), with faster convergence by the 20th epoch and latency as low as 10 milliseconds. Converted SNNs also achieve competitive performance but require higher spike counts and longer simulation windows. STDP-based SNNs, though slower to converge, exhibit the lowest spike counts and energy consumption (as low as 5 millijoules per inference), making them optimal for unsupervised and low-power tasks. These findings reinforce the suitability of SNNs for energy-constrained, latency-sensitive, and adaptive applications such as robotics, neuromorphic vision, and edge AI systems. While promising, challenges persist in hardware standardization and scalable training. This study concludes that SNNs, with further refinement, are poised to propel the next phase of neuromorphic computing.
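Since the abstract centers on LIF dynamics and surrogate-gradient training, the following minimal PyTorch sketch shows both pieces together. The fast-sigmoid surrogate and the parameter values are common defaults assumed for illustration, not the exact configurations benchmarked in the study.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate gradient
    in the backward pass; the standard trick for training SNNs with backprop."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0).float()              # spike where membrane crosses threshold
    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2  # smooth pseudo-derivative

def lif_forward(currents, tau=0.9, v_th=1.0):
    """Unroll a leaky integrate-and-fire neuron over T timesteps.

    currents: (T, batch, n) input currents; returns spikes of the same shape.
    """
    v = torch.zeros_like(currents[0])
    spikes = []
    for i_t in currents:
        v = tau * v + i_t                    # leaky integration
        s = SurrogateSpike.apply(v - v_th)   # differentiable spike
        v = v - s * v_th                     # soft reset by the threshold
        spikes.append(s)
    return torch.stack(spikes)
```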